
    Actively Secure Garbled Circuits with Constant Communication Overhead in the Plain Model

    We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first protocol that has only a constant multiplicative communication overhead compared to Yao's protocol for passive adversaries, and can be implemented in the plain model by only making a black-box use of (parallel) oblivious transfer and a pseudo-random generator. This improves over the polylogarithmic overhead of the previous best protocol. A similar result could previously be obtained only in an amortized setting, using preprocessing, or by assuming bit-oblivious-transfer as an ideal primitive that has a constant cost. We present two variants of this result, one which is aimed at minimizing the number of oblivious transfers and another which is aimed at optimizing concrete efficiency. Our protocols are based on a novel combination of previous techniques together with a new efficient protocol to certify that pairs of strings transmitted via oblivious transfer satisfy a global relation. Settling for "security with correlated abort", the concrete communication complexity of the second variant of our protocol can beat the best previous protocols with the same kind of security even for realistic values of the circuit size and the security parameter. This variant is particularly attractive in the offline-online setting, where the online cost is dominated by a single evaluation of an authenticated garbled circuit, and can also be made non-interactive using the Fiat-Shamir heuristic.
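
    To make the passive-security baseline concrete, the sketch below garbles a single AND gate in the classic Yao style that the paper's constant overhead is measured against. It is only an illustration under assumed parameters (SHA-256 standing in for the PRG, 128-bit labels, a plainly shuffled table), not the authors' actively secure construction.

        # Minimal sketch of passive-secure Yao garbling for one AND gate.
        # SHA-256 stands in for the PRG; point-and-permute and all active-security
        # machinery from the paper are deliberately omitted.
        import os, hashlib, random

        LABEL_LEN = 16  # 128-bit wire labels

        def new_wire():
            # one random label per semantic value (0 and 1)
            return {0: os.urandom(LABEL_LEN), 1: os.urandom(LABEL_LEN)}

        def H(ka, kb):
            # double-key hash used as a one-time pad for table rows
            return hashlib.sha256(ka + kb).digest()[:LABEL_LEN]

        def xor(x, y):
            return bytes(a ^ b for a, b in zip(x, y))

        def garble_and_gate(wa, wb, wc):
            # encrypt the output label for each of the four input combinations
            rows = [xor(H(wa[a], wb[b]), wc[a & b]) for a in (0, 1) for b in (0, 1)]
            random.shuffle(rows)  # hide which row corresponds to which inputs
            return rows

        def evaluate(rows, la, lb):
            # the evaluator trial-decrypts every row with its two input labels
            return [xor(row, H(la, lb)) for row in rows]

        wa, wb, wc = new_wire(), new_wire(), new_wire()
        rows = garble_and_gate(wa, wb, wc)
        assert wc[1] in evaluate(rows, wa[1], wb[1])  # AND(1,1) label is recovered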

    On the Communication Complexity of Secure Computation

    Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information - i.e., the gap between mutual information and Gács-Körner common information - a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply to the general case. Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
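
    As a reading aid, the residual-information measure behind these lower bounds can be written as the gap between mutual information and the Gács-Körner common information; the notation below is assumed for this summary and may differ from the paper's.

        RI(X;Y) \;=\; I(X;Y) \;-\; C_{\mathrm{GK}}(X;Y),
        \qquad
        C_{\mathrm{GK}}(X;Y) \;=\; \max_{W:\, W = f(X) = g(Y)} H(W),

    where the maximum ranges over random variables W that are computable from X alone and from Y alone.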

    Reusable Non-Interactive Secure Computation

    We consider the problem of Non-Interactive Secure Computation (NISC), a 2-message "Sender-Receiver" secure computation protocol that retains its security even when both parties can be malicious. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice. Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver's first message is reused. Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results:
    - We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. As a corollary, we get reusable NISC/OLE for general Boolean circuits using any one-way function.
    - We complement this by a negative result, showing that reusable NISC/OT is impossible to achieve, and a more restricted negative result for the case of the zero-knowledge functionality. This provides a formal justification for the need to replace OT by OLE.
    - We build a universally composable 2-message OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008).
    - By combining our NISC/OLE protocol and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols where, following a statement-independent preprocessing, both proving and verifying are entirely "non-cryptographic" and involve only a constant computational overhead.
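
    For orientation, the OLE primitive that replaces OT here can be described by a one-line ideal functionality: the sender inputs a pair (a, b), the receiver inputs x and learns only a*x + b over a field. The field and interface in the sketch below are assumptions for illustration, not the paper's protocol.

        # Ideal-functionality view of one oblivious linear-function evaluation (OLE)
        # call over a prime field (the field choice is an illustrative assumption).
        P = 2**61 - 1  # Mersenne prime used as the field modulus

        def ideal_ole(a, b, x):
            # trusted-party behaviour: the receiver's output is a*x + b mod P;
            # the sender learns nothing about x, the receiver nothing beyond a*x + b
            return (a * x + b) % P

        a, b, x = 12345, 67890, 424242
        assert ideal_ole(a, b, x) == (a * x + b) % P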

    Privacy-Preserving Distance Computation and Proximity Testing on Earth, Done Right

    In recent years, the availability of GPS-enabled smartphones has made location-based services extremely popular. A multitude of applications rely on location information to provide a wide range of services. Location information is, however, extremely sensitive and can be easily abused. In this paper, we introduce the first protocols for secure computation of distance and for proximity testing over a sphere. Our secure distance protocols allow two parties, Alice and Bob, to determine their mutual distance without disclosing any additional information about their location. Through our secure proximity testing protocols, Alice only learns whether Bob is in close proximity, i.e., within some arbitrary distance. Our techniques rely on three different representations of Earth, which provide different trade-offs between accuracy and performance. We show, via experiments on a prototype implementation, that our protocols are practical on resource-constrained smartphone devices. Our distance computation protocols run in 54 to 78 ms on a commodity Android smartphone, and our proximity tests require between 1.2 s and 2.8 s on the same platform. The imprecision introduced by our protocols is very small, between 0.1% and 3% on average, depending on the distance.
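
    As a point of reference, the plaintext quantity these protocols approximate is a great-circle distance. The sketch below computes it with the haversine formula on a spherical Earth model; it only shows the underlying geometry, with an assumed radius and test coordinates, and operates directly on the coordinates that the paper's protocols are designed to hide.

        # Plaintext great-circle distance via the haversine formula (spherical model).
        from math import radians, sin, cos, asin, sqrt

        EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres (assumed constant)

        def haversine(lat1, lon1, lat2, lon2):
            phi1, phi2 = radians(lat1), radians(lat2)
            dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
            h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
            return 2 * EARTH_RADIUS_M * asin(sqrt(h))

        # proximity-style test: are the two points within 1 km of each other?
        print(haversine(59.3293, 18.0686, 59.3326, 18.0649) < 1000)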

    Using Fully Homomorphic Hybrid Encryption to Minimize Non-interactive Zero-Knowledge Proofs

    A non-interactive zero-knowledge (NIZK) proof can be used to demonstrate the truth of a statement without revealing anything else. It has been shown under standard cryptographic assumptions that NIZK proofs of membership exist for all languages in NP. While there is evidence that such proofs cannot be much shorter than the corresponding membership witnesses, all known NIZK proofs for NP languages are considerably longer than the witnesses. Soon after Gentry’s construction of fully homomorphic encryption, several groups independently contemplated the use of hybrid encryption to optimize the size of NIZK proofs and discussed this idea within the cryptographic community. This article formally explores this idea of using fully homomorphic hybrid encryption to optimize NIZK proofs and other related cryptographic primitives. We investigate the question of minimizing the communication overhead of NIZK proofs for NP and show that if fully homomorphic encryption exists then it is possible to get proofs that are roughly of the same size as the witnesses. Our technique consists in constructing a fully homomorphic hybrid encryption scheme with ciphertext size |m|+poly(k), where m is the plaintext and k is the security parameter. Encrypting the witness for an NP-statement allows us to evaluate the NP-relation in a communication-efficient manner. We apply this technique to both standard non-interactive zero-knowledge proofs and to universally composable non-interactive zero-knowledge proofs. The technique can also be applied outside the realm of non-interactive zero-knowledge proofs, for instance to get witness-size interactive zero-knowledge proofs in the plain model without any setup or to minimize the communication in secure computation protocols.
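
    The claimed proof size follows the usual hybrid-encryption accounting, sketched below with assumed notation: only a short symmetric key K is encrypted under the fully homomorphic scheme, while the witness m is encrypted under K.

        \mathsf{Enc}_{\mathrm{hyb}}(m) = \big(\mathsf{Enc}_{\mathrm{FHE}}(K),\ \mathsf{Enc}^{\mathrm{sym}}_{K}(m)\big),
        \qquad
        |\mathsf{Enc}_{\mathrm{hyb}}(m)| = \mathrm{poly}(k) + |m|,

    so, at a high level, homomorphically evaluating the symmetric decryption circuit followed by the NP-relation keeps the communication at |m| + poly(k).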

    Computing on Encrypted Data

    Encryption secures our stored data but seems to make it inert. Can we process encrypted data without having to decrypt it first? Answers to this fundamental question give rise to a wide variety of applications. Here, we explore this question in a number of settings, focusing on how interaction and secure hardware can help us compute on encrypted data, and what can be done if we have neither interaction nor secure hardware at our disposal.

    Two-Round MPC: Information-Theoretic and Black-Box

    We continue the study of protocols for secure multiparty computation (MPC) that require only two rounds of interaction. The recent works of Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) essentially settle the question by showing that such protocols are implied by the minimal assumption that a two-round oblivious transfer (OT) protocol exists. However, these protocols inherently make a non-black-box use of the underlying OT protocol, which results in poor concrete efficiency. Moreover, no analogous result was known in the information-theoretic setting, or alternatively based on one-way functions, given an OT correlations setup or an honest majority. Motivated by these limitations, we study the possibility of obtaining information-theoretic and "black-box" implementations of two-round MPC protocols. We obtain the following results:
    - Two-round MPC from OT correlations. Given an OT correlations setup, we get protocols that make a black-box use of a pseudorandom generator (PRG) and are secure against a malicious adversary corrupting an arbitrary number of parties. For a semi-honest adversary, we get similar information-theoretic protocols for branching programs.
    - New NIOT constructions. Towards realizing OT correlations, we extend the DDH-based non-interactive OT (NIOT) protocol of Bellare and Micali (Crypto '89) to the malicious security model, and present new NIOT constructions from the Quadratic Residuosity Assumption (QRA) and the Learning With Errors (LWE) assumption.
    - Two-round black-box MPC with strong PKI setup. Combining the two previous results, we get two-round MPC protocols that make a black-box use of any DDH-hard or QRA-hard group. The protocols can offer security against a malicious adversary, and require a PKI setup that depends on the number of parties and the size of the computation, but not on the inputs or the identities of the participating parties.
    - Two-round honest-majority MPC from secure channels. Given secure point-to-point channels, we get protocols that make a black-box use of a pseudorandom generator (PRG), as well as information-theoretic protocols for branching programs. These protocols can tolerate a semi-honest adversary corrupting a strict minority of the parties, where in the information-theoretic case the complexity is quasi-polynomial in the number of parties.
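
    For intuition about the OT correlations setup used in the first result, recall that a random-OT correlation (sender holds random r0, r1; receiver holds a random choice bit c and r_c) can be turned into a chosen-input OT with a single message in each direction. The sketch below shows only this textbook derandomization step; it is not the paper's two-round MPC protocol, and all names are illustrative.

        # Derandomizing a random-OT correlation into a chosen-input 1-out-of-2 OT.
        import os

        def xor(x, y):
            return bytes(a ^ b for a, b in zip(x, y))

        # --- correlated-randomness setup phase ---
        r = {0: os.urandom(16), 1: os.urandom(16)}  # sender's random strings
        c = 1                                       # receiver's random choice bit
        r_c = r[c]                                  # receiver's share of the correlation

        # --- online phase: sender holds (m0, m1), receiver holds choice bit b ---
        m = {0: b"message-zero....", 1: b"message-one....."}
        b = 0
        e = b ^ c                                   # receiver -> sender: one bit
        x = {0: xor(m[0], r[e]), 1: xor(m[1], r[1 ^ e])}  # sender -> receiver
        assert xor(x[b], r_c) == m[b]               # receiver recovers m_b; m_{1-b} stays masked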

    Two Round Information-Theoretic MPC with Malicious Security

    We provide the first constructions of two-round information-theoretic (IT) secure multiparty computation (MPC) protocols in the plain model that tolerate any t < n/2 malicious corruptions. Our protocols satisfy the strongest achievable standard notions of security in two rounds in different communication models. Previously, IT-MPC protocols in the plain model either required a larger number of rounds, or a smaller minority of corruptions.
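
    As background for the honest-majority (t < n/2) setting, the standard building block is degree-t Shamir secret sharing over a finite field, sketched below. The parameters and field are assumptions of this sketch, and the paper's two-round protocols involve far more than this primitive.

        # Degree-t Shamir sharing over a prime field: any t+1 shares reconstruct
        # the secret, while any t shares reveal nothing about it.
        import random

        P = 2**61 - 1  # prime field modulus (illustrative choice)

        def share(secret, n, t):
            # random degree-t polynomial with constant term equal to the secret
            coeffs = [secret] + [random.randrange(P) for _ in range(t)]
            return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                    for i in range(1, n + 1)]

        def reconstruct(points):
            # Lagrange interpolation at x = 0
            secret = 0
            for i, (xi, yi) in enumerate(points):
                num = den = 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        n, t = 5, 2
        shares = share(123456789, n, t)
        assert reconstruct(shares[:t + 1]) == 123456789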

    Local deterministic model of singlet state correlations based on relaxing measurement independence

    The derivation of Bell inequalities requires an assumption of measurement independence, related to the amount of free will experimenters have in choosing measurement settings. Violation of these inequalities by singlet state correlations, as has been experimentally observed, brings this assumption into question. A simple measure of the degree of measurement independence is defined for correlation models, and it is shown that all spin correlations of a singlet state can be modeled by giving up just 14% of measurement independence. The underlying model is deterministic and no-signalling. It may thus be favourably compared with other underlying models of the singlet state, which require maximum indeterminism or maximum signalling. A local deterministic model is also given that achieves the maximum possible violation of the well-known Bell-CHSH inequality, at a cost of only 1/3 of measurement independence.
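
    For reference, the Bell-CHSH quantity discussed here is (with notation assumed for this summary)

        S = E(\mathbf a,\mathbf b) + E(\mathbf a,\mathbf b') + E(\mathbf a',\mathbf b) - E(\mathbf a',\mathbf b'),
        \qquad
        E_{\mathrm{singlet}}(\mathbf a,\mathbf b) = -\,\mathbf a \cdot \mathbf b,

    where |S| ≤ 2 for local deterministic models with full measurement independence, quantum singlet correlations reach 2√2 (the Tsirelson bound), and 4 is the algebraic maximum.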